Multi-Task Reinforcement Learning with Soft Modularization
Multi-task learning is a very challenging problem in reinforcement learning. While training multiple tasks jointly allows the policies to share parameters across different tasks, the optimization problem becomes non-trivial: it remains unclear which parameters in the network should be reused across tasks, and gradients from different tasks may interfere with each other. Thus, instead of naively sharing parameters across tasks, we introduce an explicit modularization technique on the policy representation to alleviate this optimization issue. Given a base policy network, we design a routing network that estimates different routing strategies to reconfigure the base network for each task. Instead of directly selecting a route for each task, our task-specific policy uses a method called soft modularization to softly combine all possible routes, which makes it suitable for sequential tasks. We experiment with various robotics manipulation tasks in simulation and show that our method improves both sample efficiency and performance over strong baselines by a large margin.
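The architecture sketched in the abstract can be illustrated with a minimal NumPy example. This is a hypothetical sketch, not the authors' implementation: the base network is a stack of layers, each holding several small modules, and a task-conditioned routing function produces a soft (softmax-normalized) mixing matrix that recombines module outputs between layers. Module sizes, the linear routing parameterization, and the names (`SoftModularNet`, `route`, `task_emb`) are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

class SoftModularNet:
    """Sketch of a base policy network whose modules are softly recombined
    by task-conditioned routing weights (all parameters random, no training)."""

    def __init__(self, obs_dim, hid, act_dim, n_layers=2, n_modules=2, task_dim=4):
        # n_modules parallel linear modules per layer.
        self.W = [[rng.normal(scale=0.1, size=(hid if l else obs_dim, hid))
                   for _ in range(n_modules)] for l in range(n_layers)]
        # One linear routing map per gap between layers: task embedding -> M x M logits.
        self.route = [rng.normal(scale=0.1, size=(task_dim, n_modules * n_modules))
                      for _ in range(n_layers - 1)]
        self.head = rng.normal(scale=0.1, size=(hid, act_dim))
        self.M = n_modules

    def forward(self, obs, task_emb):
        # Layer 0: every module sees the raw observation.
        h = [np.tanh(obs @ W) for W in self.W[0]]
        for l, Ws in enumerate(self.W[1:]):
            # Routing: task embedding -> row-normalized M x M soft routing matrix.
            p = softmax((task_emb @ self.route[l]).reshape(self.M, self.M))
            # Each module j receives a convex combination of previous module outputs,
            # i.e. all possible routes are combined softly rather than one being picked.
            inputs = [sum(p[j, i] * h[i] for i in range(self.M)) for j in range(self.M)]
            h = [np.tanh(x @ W) for x, W in zip(inputs, Ws)]
        # Pool module outputs and map to actions.
        return np.mean(h, axis=0) @ self.head

net = SoftModularNet(obs_dim=6, hid=8, act_dim=3)
action = net.forward(np.zeros(6), task_emb=np.ones(4))
```

Because the routing weights are continuous rather than discrete module selections, the whole network stays differentiable end to end, which is what allows a single policy to be trained jointly across tasks.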
Review for NeurIPS paper: Multi-Task Reinforcement Learning with Soft Modularization
Summary and Contributions: In this article, the authors present a new method for multi-task reinforcement learning. While the method is not restricted to a particular domain, the experimental part investigates it in the application domain of manipulation, using an existing manipulation task benchmark suite (Meta-World). The main issues with multi-task RL that the authors identify in the introduction and use to motivate their method are conflicting gradients and balancing optimisation between tasks. These are important issues that typically hurt the gains we expect from multi-task RL in terms of data efficiency and final performance, as reported in all major publications in the field. From a high-level perspective, there are two main ideas in the paper.
Meta-review for NeurIPS paper: Multi-Task Reinforcement Learning with Soft Modularization
The reviewers agreed that this is a reasonably well-written paper, on an important topic, with excellent empirical results. Given that the level of enthusiasm varied widely across reviewers, I'd recommend revising the final paper for more clarity, especially with respect to the novelty of the ideas.
Multi-Task Reinforcement Learning with Soft Modularization
Yang, Ruihan, Xu, Huazhe, Wu, Yi, Wang, Xiaolong